92 research outputs found

    Evaluation of Handwriting Similarities Using Hermite Transform

    Get PDF
    In this paper, we present a new method for denoising and indexing handwritten documents. The work is based on the Hermite transform, a polynomial transform and a good model of the human visual system (HVS). We use this transform to analyze handwriting through the visual aspect of its texture, and apply the analysis to document indexing (finding documents written by the same author) and document classification (grouping documents whose handwriting has a similar visual aspect). It is often necessary to clean these documents before the analysis step; for that purpose we also use the Hermite decomposition. The current results are very promising and show that it is possible to characterize handwritten traces without any a priori grapheme segmentation.
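The Hermite transform's analysis filters are, up to normalization, derivatives of a Gaussian window, which is what links it to the Gaussian-derivative model of vision. As an illustration only (this code is not from the paper; filter sizes, orders and the toy texture feature are assumptions), a minimal sketch of such a filter bank could look like:

```python
import numpy as np

def gaussian_derivative_filters(sigma=2.0, max_order=3, radius=10):
    """1-D Gaussian-derivative (Hermite-like) analysis filters."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    filters = [g]
    for _ in range(max_order):
        # each numerical derivative raises the filter order by one
        filters.append(np.gradient(filters[-1]))
    # unit-energy normalization makes responses comparable across orders
    return [f / np.linalg.norm(f) for f in filters]

def texture_signature(image, filters):
    """Mean energy of each filter response along image rows (toy feature)."""
    feats = []
    for f in filters:
        resp = np.apply_along_axis(
            lambda row: np.convolve(row, f, mode="same"), 1, image)
        feats.append(float(np.mean(resp ** 2)))
    return np.array(feats)
```

Comparing such per-order energy signatures between two document images is one simple way a texture-based similarity of handwriting could be computed.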

    A new approach for centerline extraction in handwritten strokes: an application to the constitution of a code book

    Get PDF
    International audience. In this paper we present a new method for analyzing and decomposing handwritten documents into glyphs (graphemes) and building their associated code book. The techniques involved are inspired by image processing methods in a broad sense and by mathematical models involving graph coloring. Our approach first provides a rapid and detailed characterization of handwritten shapes based on dynamic tracking of the handwriting (curvature, thickness, direction, etc.), and then a very efficient analysis method for the categorization of basic shapes (graphemes). The tools we have produced enable paleographers to study a large volume of manuscripts quickly and more accurately, and to extract a large number of characteristics specific to an individual or an era.

    Automatic Blob Fitting in Comprehensive Two-Dimensional Gas Chromatography Images

    Get PDF
    Two-dimensional gas chromatography is a recent technology that is particularly efficient for detailed molecular analysis. However, due to the novelty of the method and the lack of automated analysis tools, quantitative data processing is often performed manually; results are therefore strongly user-dependent, time-consuming to obtain and, consequently, relatively inaccurate. In this paper, we extend conventional signal analysis techniques by exploiting specific characteristics of chromatographic data and by developing new methods for estimating the quantitative contribution of chemical regions in the produced images. Data-driven information is retrieved from the quantitative chemical analysis based on automatic Savitzky-Golay peak location, which increases both the processing speed and the analysis efficiency and improves confidence in experimental repeatability.
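The peak-location step can be pictured with a small sketch. The following is not the authors' pipeline: it hand-rolls Savitzky-Golay smoothing coefficients (a least-squares polynomial fit over a sliding window) and marks local maxima of the smoothed trace as peaks, with illustrative window, degree and height values:

```python
import numpy as np

def savgol_coeffs(window=7, degree=3):
    """Center-point Savitzky-Golay smoothing coefficients.

    Least-squares fit of a degree-`degree` polynomial over a symmetric
    window; the coefficients give the fitted value at the window center.
    """
    m = window // 2
    A = np.vander(np.arange(-m, m + 1, dtype=float), degree + 1,
                  increasing=True)
    return np.linalg.pinv(A)[0]  # row of the constant term = center value

def locate_peaks(signal, window=7, degree=3, min_height=0.0):
    """Smooth with Savitzky-Golay, then return indices of local maxima."""
    c = savgol_coeffs(window, degree)
    smooth = np.convolve(signal, c[::-1], mode="same")
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i - 1] < smooth[i] >= smooth[i + 1]
             and smooth[i] > min_height]
    return np.array(peaks), smooth
```

Fitting a low-degree polynomial rather than plain averaging is what lets Savitzky-Golay smoothing suppress noise while preserving peak heights and positions, which is why it suits chromatographic traces.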

    Analysis and interpretation of visual saliency for document functional labeling

    No full text
    International audience. In this paper we propose a complete methodology of printed text characterization for document labeling using texture features inspired by a psychovisual approach. This approach considers visual human-based predicates to describe and identify text units according to their visual saliency and their perceptual attraction power on the reader's eye. It supports a quick and robust process of functional labeling used to characterize text regions of document pages. The test databases are the Finland MTDB Oulu database, which provides a great panel of document layouts and contents, and our laboratory corpus, which contains a large variety of composite documents (about 200 pages). The performance of the method gives very promising results.

    Localisation and augmented reality for mobile applications in cultural heritage

    No full text
    International audience. Abstract to come.

    Tracking and enhancement of text in video credits

    No full text
    National audience. This article deals with the automatic extraction of information from audiovisual documents. The quantity of documents of this type produced and archived every day is considerable. To exploit the colossal archives thus created, one must know what is stored and where to find it, which requires building indexes that are as precise as possible. The volumes of data to be processed clearly show the value of automatic indexing, and therefore of automatic information extraction. The specific information addressed here is the text contained in images, in particular the scrolling text of film and video credits. These texts are located in successive frames, then integrated to improve their quality, before being passed to an OCR engine for recognition. This yields information that would be very difficult to obtain otherwise.

    Visual Information and Information Systems: 8th International Conference, VISUAL 2005, Amsterdam, The Netherlands, July 5, 2005, Revised Selected Papers

    No full text
    National audience. Visual Information Systems on the Move. Following the success of the previous International Conferences on VISual Information Systems held in Melbourne, San Diego, Amsterdam, Lyon, Taiwan, Miami, and San Francisco, the 8th International Conference on VISual Information Systems, held in Amsterdam, dealt with a variety of aspects, from visual systems of multimedia information to systems of visual information such as image databases. The handling of visual information is boosted by the rapid increase of hardware and Internet capabilities, and advances in sensors have turned all kinds of information into digital form. Technology for visual information systems is more urgently needed than ever before: new computational methods to index, compress, retrieve and discover pictorial information, new algorithms for the archival of and access to very large amounts of digital images and videos, and new systems with friendly visual interfaces. Visual information processing, feature extraction and aggregation at the semantic level, content-based retrieval, and the study of user intention in query processing will continue to be areas of great interest. As digital content becomes widespread, issues of delivery and consumption of multimedia content were also topics of this conference. Be on the move… June 2005, Stéphane Bres, Robert Laurin

    Do Hermite and Gabor filters give equivalent models of the human visual system?

    No full text
    National audience. Receptive field profile models have played an important role in image processing because they efficiently encode the visual information relevant to human perception. Among the proposed mathematical models, the Gabor model is the best known and most widely used. A less well-known model is the Hermite model, based on the analysis filters of the Hermite transform introduced by J.-B. Martens. It agrees with the Gaussian-derivative model of human vision developed by Richard A. Young. However, it is not as well known as the Gabor model, although it has the advantages of an orthogonal basis and a better fit to cortical data. In this paper we present an analytical comparison based on minimizing the energy of the error between the two models, from which the optimal parameters bringing the two models closest to each other are derived. The results show that the two models are equivalent: a Hermite filter can be implemented with an equivalent Gabor filter and vice versa, provided the conditions leading to error minimization are met. The results also show that both models extract essentially the same frequency information. We hope to provide a framework in which applications using the Gabor model can be extended to the Hermite model.
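The kind of comparison described above can be sketched numerically: fix a Hermite filter (an n-th derivative of a Gaussian), then search for the Gabor filter minimizing the energy of the normalized difference. The snippet below is a toy illustration with assumed parameter values (sigma, grid, frequency range), not the paper's analytical derivation:

```python
import numpy as np

def gabor(x, sigma, freq):
    """Even (cosine-phase) Gabor filter."""
    return np.exp(-x ** 2 / (2.0 * sigma ** 2)) * np.cos(2.0 * np.pi * freq * x)

def hermite_filter(x, sigma, order):
    """Order-n Hermite analysis filter: the n-th derivative of a Gaussian,
    obtained here by repeated numerical differentiation."""
    f = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    for _ in range(order):
        f = np.gradient(f, x)
    return f

def energy_error(a, b):
    """Energy of the difference between unit-energy filters (sign-invariant)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return min(float(np.sum((a - b) ** 2)), float(np.sum((a + b) ** 2)))

# Grid search: which Gabor frequency best matches an order-2 Hermite filter?
x = np.linspace(-10.0, 10.0, 401)
h2 = hermite_filter(x, sigma=2.0, order=2)
freqs = np.linspace(0.01, 0.3, 200)
errors = [energy_error(gabor(x, 2.0, f), h2) for f in freqs]
best_freq = float(freqs[int(np.argmin(errors))])
```

With these assumed settings, the residual energy at the best frequency is small, echoing the paper's conclusion that, with properly matched parameters, the two filter families are nearly interchangeable.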

    Using polynomial transforms for the indexing of audiovisual documents

    No full text
    National audience. This article presents a video indexing method that combines spatial and temporal information extracted separately from video sequences. To do so, we use two families of filters that are good models of the human visual system, from the point of view of both spatial and temporal perception. These filters are built on polynomial transforms, which perform a local decomposition of the signals. In the spatial version we use Hermite polynomials (in agreement with Gaussian-derivative models), and in the temporal version Laguerre polynomials, which preserve causality. By integrating the two models, we build a spatio-temporal indexing system. An efficient implementation is obtained with the discrete versions of these polynomials: the Krawtchouk and Meixner polynomials.
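The discrete counterpart of the Hermite family mentioned at the end, the Krawtchouk polynomials, can be generated with a standard three-term recurrence (normalized so that K_n(0) = 1). The sketch below is generic textbook material, not the paper's implementation, and the parameter values in the comments are only examples:

```python
import math
import numpy as np

def krawtchouk(n, x, p, N):
    """Krawtchouk polynomial K_n(x; p, N), normalized so K_n(0) = 1.

    Uses the standard three-term recurrence
      -x K_n = p(N-n) K_{n+1} - [p(N-n) + n(1-p)] K_n + n(1-p) K_{n-1},
    e.g. with p = 0.5 on the grid x = 0..N.
    """
    x = np.asarray(x, dtype=float)
    k_prev = np.ones_like(x)          # K_0
    if n == 0:
        return k_prev
    k = 1.0 - x / (p * N)             # K_1
    for m in range(1, n):
        k_next = (((p * (N - m) + m * (1 - p) - x) * k
                   - m * (1 - p) * k_prev) / (p * (N - m)))
        k_prev, k = k, k_next
    return k
```

These polynomials are orthogonal under the binomial weight on {0, …, N}, which is what makes an exact, stable discrete transform possible without sampling continuous Hermite functions.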

    Detection of Interest Points for Image Indexation

    No full text
    This paper addresses the detection and delineation of interest points in images, as part of a project on automatic image and video indexing for content-based search. We propose a novel key point detector based on multiresolution contrast information, and compare it with the Plessey feature point detector as well as the detector introduced in the SUSAN project. As we are interested in common database applications, we focus this comparison on robustness to coding noise such as JPEG artifacts.
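A detector in the spirit of the multiresolution-contrast idea (though not the authors' actual algorithm; the blur scheme, scales and threshold are made up for illustration) could be sketched as: sum, over several scales, the absolute deviation of each pixel from its local mean, then keep local maxima of the combined contrast map:

```python
import numpy as np

def box_blur(img, times=1):
    """Repeated 3x3 box blur with edge replication (toy pyramid stand-in)."""
    out = img.astype(float)
    for _ in range(times):
        p = np.pad(out, 1, mode="edge")
        out = sum(p[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

def contrast_keypoints(img, scales=(1, 2, 4), thresh=0.1):
    """Local maxima of a multi-scale contrast map (illustrative only)."""
    img = img.astype(float)
    # contrast at a scale = absolute deviation from the local mean
    c = sum(np.abs(img - box_blur(img, s)) for s in scales)
    c = c / (c.max() + 1e-12)
    # keep pixels that dominate their 3x3 neighborhood and pass the threshold
    p = np.pad(c, 1, mode="constant")
    neigh = np.stack([p[i:i + c.shape[0], j:j + c.shape[1]]
                      for i in range(3) for j in range(3)])
    is_max = c >= neigh.max(axis=0)
    ys, xs = np.nonzero(is_max & (c > thresh))
    return list(zip(ys.tolist(), xs.tolist()))
```

Because the contrast map is built from intensity deviations rather than gradients of compressed block edges, a detector of this kind can plausibly be more tolerant of JPEG-style coding noise, which is the axis of comparison the paper focuses on.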